In this paper, we introduce MINTIME, a video deepfake detection approach that captures spatial and temporal anomalies and handles instances of multiple people in the same video as well as variations in face size. Previous approaches disregard such information, either by using simple a-posteriori aggregation schemes, i.e., an average or max operation, or by using only one identity for inference, i.e., the largest one. In contrast, the proposed approach builds on a Spatio-Temporal TimeSformer combined with a Convolutional Neural Network backbone to capture spatio-temporal anomalies from the face sequences of the multiple identities depicted in a video. This is achieved through an Identity-aware Attention mechanism that attends to each face sequence independently via a masking operation and facilitates video-level aggregation. In addition, two novel embeddings are employed: (i) the Temporal Coherent Positional Embedding, which encodes the temporal information of each face sequence, and (ii) the Size Embedding, which encodes the size of the faces as a ratio to the video frame size. These extensions allow our system to adapt particularly well in the wild by learning how to aggregate information from multiple identities, which is usually disregarded by other methods in the literature. It achieves state-of-the-art results on the ForgeryNet dataset, with an improvement of up to 14% AUC on videos containing multiple people, and demonstrates ample generalization capabilities in cross-forgery and cross-dataset settings. The code is publicly available at https://github.com/davide-coccomini/MINTIME-Multi-Identity-size-iNvariant-TIMEsformer-for-Video-Deepfake-Detection.
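As a rough illustration of the masking idea behind the Identity-aware Attention, the sketch below builds a boolean mask from per-token identity labels so that each face token attends only to tokens of the same identity sequence. Tensor names, shapes, and the plain scaled dot-product attention are illustrative assumptions, not the paper's actual implementation.

```python
import torch

def identity_attention_mask(identity_ids: torch.Tensor) -> torch.Tensor:
    """Boolean (num_tokens, num_tokens) mask: attention is allowed only between
    face tokens carrying the same identity label (illustrative assumption)."""
    return identity_ids.unsqueeze(0) == identity_ids.unsqueeze(1)

# Hypothetical example: 3 faces of identity 0 followed by 2 faces of identity 1
ids = torch.tensor([0, 0, 0, 1, 1])
mask = identity_attention_mask(ids)

# Applying the mask inside a plain scaled dot-product attention (toy tensors)
q = k = v = torch.randn(5, 64)
scores = q @ k.t() / 64 ** 0.5
scores = scores.masked_fill(~mask, float("-inf"))   # block cross-identity attention
attn_out = scores.softmax(dim=-1) @ v
```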
Image-text matching is playing a leading role among tasks that involve the joint understanding of vision and language. In the literature, this task is commonly used as a pre-training objective for architectures able to jointly process images and texts. However, it also has a direct downstream application: cross-modal retrieval, which consists in finding images relevant to a given query text, or vice versa. Solving this task is of critical importance for cross-modal search engines. Many recent methods have proposed effective solutions to the image-text matching problem, mostly using large vision-language (VL) Transformer networks. However, these models are often computationally expensive, especially at inference time. This prevents their adoption in large-scale cross-modal retrieval scenarios, where results should be provided to the user almost instantaneously. In this paper, we propose to fill the gap between effectiveness and efficiency by introducing an ALign And DIstill Network (ALADIN). ALADIN first produces highly effective scores by aligning images and texts at a fine-grained level. Then, it learns a shared embedding space, in which an efficient kNN search can be performed, by distilling the relevance scores obtained from the fine-grained alignments. We obtain remarkable results on MS-COCO, showing that our method can compete with state-of-the-art VL Transformers while being almost 90 times faster. The code for reproducing our results is available at https://github.com/mesnico/aladin.
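The general align-then-distill recipe can be hinted at with a short sketch: the similarities of the cheap shared embeddings are trained to mimic the relevance scores of the fine-grained alignment model, after which retrieval reduces to a kNN search. The KL-based loss, the temperature, and all tensor names are assumptions for illustration, not the exact objective used by ALADIN.

```python
import torch
import torch.nn.functional as F

def distillation_loss(img_emb, txt_emb, teacher_scores, temperature=0.05):
    """Make the dot-product similarities of the (efficient) shared embeddings mimic
    the relevance scores produced by the (expensive) fine-grained alignment model."""
    student_scores = (img_emb @ txt_emb.t()) / temperature
    target = F.softmax(teacher_scores / temperature, dim=-1)
    return F.kl_div(F.log_softmax(student_scores, dim=-1), target, reduction="batchmean")

# Illustrative retrieval: cosine-similarity kNN over the distilled embeddings
# replaces scoring every image-text pair with the fine-grained model.
gallery_img_emb = F.normalize(torch.randn(10_000, 512), dim=-1)
query_txt_emb = F.normalize(torch.randn(4, 512), dim=-1)
top10 = (query_txt_emb @ gallery_img_emb.t()).topk(k=10, dim=-1).indices
```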
Learning algorithms for deep neural networks are typically based on supervised end-to-end Stochastic Gradient Descent (SGD) training with error backpropagation (backprop). Backprop-based algorithms require a large number of labelled training samples to reach high performance. However, in many realistic applications, even when plenty of image samples are available, only a few of them are labelled, and semi-supervised, sample-efficient training strategies must be used. Hebbian learning represents a possible approach towards sample-efficient training; however, in its current solutions, it does not scale well to large datasets. In this paper, we present FastHebb, an efficient and scalable solution for Hebbian learning that achieves this by 1) merging update computation and aggregation over a batch of inputs, and 2) leveraging efficient matrix multiplication algorithms on GPU. We validate our approach on different computer vision benchmarks in a semi-supervised learning scenario. FastHebb outperforms previous solutions in terms of training speed and, notably, for the first time we are able to bring Hebbian algorithms to ImageNet scale.
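The core efficiency idea, aggregating Hebbian updates over a whole mini-batch with a single GPU matrix multiplication, can be sketched as follows. This uses a plain Hebbian rule with names chosen for illustration; it is not FastHebb's exact update rule, which would also need normalization or decay terms to keep the weights bounded.

```python
import torch

def hebbian_batch_update(W, x, lr=0.01):
    """Plain Hebbian rule, delta_W = lr * y x^T, averaged over a batch with one matmul.

    W: (out_features, in_features) weight matrix
    x: (batch, in_features) input batch
    """
    y = x @ W.t()                    # (batch, out_features) post-synaptic activations
    delta = y.t() @ x / x.shape[0]   # aggregate all outer products y_i x_i^T at once
    return W + lr * delta

device = "cuda" if torch.cuda.is_available() else "cpu"
W = torch.randn(128, 784, device=device)
x = torch.randn(256, 784, device=device)   # one mini-batch of flattened inputs
W = hebbian_batch_update(W, x)
```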
Deep generative techniques are advancing rapidly, making it possible to create realistic manipulated images and videos and to endanger the serenity of modern society. The continuous emergence of new techniques brings with it a further problem to be faced, namely the ability of deepfake detection models to update themselves promptly so as to recognize manipulations carried out with the latest methods. This is an extremely complex problem, since training a model requires large amounts of data, which are difficult to obtain if the generative method is too recent. Moreover, continuously retraining a network is not feasible. In this paper, we ask ourselves whether, among the various deep learning techniques, there is one able to generalize the concept of deepfake to such an extent that it is not tied to one or more of the specific deepfake generation methods used during training. We compare a Vision Transformer with an EfficientNetV2 in a cross-forgery setting based on the ForgeryNet dataset. From our experiments, the EfficientNetV2 shows a greater tendency to specialize, often obtaining better results on the methods seen during training, while Vision Transformers exhibit a superior generalization ability that makes them more competent even on images generated with new methods.
Although convolutional neural networks (CNNs) have shown remarkable results in many vision tasks, they are still strained by simple yet challenging visual reasoning problems. Inspired by the recent success of Transformer networks in computer vision, in this paper we introduce the Recurrent Vision Transformer (RViT) model. Thanks to the impact of recurrent connections and spatial attention in reasoning tasks, this network achieves competitive results on the same-different visual reasoning problems from the SVRT dataset. Weight sharing in both the spatial and depth dimensions regularizes the model, allowing it to learn with fewer free parameters, using only 28k training samples. A comprehensive ablation study confirms the importance of the hybrid CNN + Transformer architecture and the role of the feedback connections, which iteratively refine the internal representation until a stable prediction is obtained. Finally, this study can provide a deeper understanding of the role of attention and recurrent connections in solving visual abstract reasoning tasks.
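A minimal sketch of the depth-wise weight sharing and iterative refinement described above: the same Transformer block is applied repeatedly to CNN-derived tokens, refining the representation at each step. The module names, dimensions, and fixed number of refinement steps are assumptions, not the exact RViT architecture.

```python
import torch
import torch.nn as nn

class RecurrentReasoner(nn.Module):
    """Depth-wise weight sharing: one Transformer block reused across steps,
    acting as a feedback loop that refines the token representation (sketch)."""

    def __init__(self, dim=256, heads=4, steps=6):
        super().__init__()
        self.block = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.head = nn.Linear(dim, 2)   # same / different decision
        self.steps = steps

    def forward(self, tokens):          # tokens: (batch, seq, dim) CNN feature tokens
        for _ in range(self.steps):     # recurrent refinement of the internal state
            tokens = self.block(tokens)
        return self.head(tokens.mean(dim=1))

model = RecurrentReasoner()
logits = model(torch.randn(8, 49, 256))   # e.g. a 7x7 CNN feature map per image
```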
Anomalies are ubiquitous in all scientific fields and can express an unexpected event, either due to incomplete knowledge about the data distribution or to an unknown process that suddenly comes into play and distorts the observations. Because of the rarity of such events, deep learning models for the anomaly detection (AD) task are typically trained on "normal" data only, i.e., non-anomalous samples, letting the neural network infer the distribution underlying the input data. In this context, we propose a novel framework, named Multi-layer One-Class Classification (MOCCA), to train and test deep learning models on the AD task. Specifically, we apply it to autoencoders. A key novelty of our work stems from explicitly optimizing the intermediate representations for the AD task. Indeed, unlike commonly used approaches that treat a neural network as a single computational block, i.e., that use only the output of the last layer, MOCCA explicitly exploits the multi-layer structure of deep architectures. The feature space of each layer is optimized for AD during training, while at test time the deep representations extracted from the trained layers are combined to detect anomalies. With MOCCA, we split the training process into two steps. First, the autoencoder is trained on the reconstruction task only. Then, we keep only the encoder, tasked with minimizing the L_2 distance between its output representations and a reference point, the centroid of the anomaly-free training data, at each considered layer. Subsequently, we combine the deep features extracted at the various trained layers of the encoder model to detect anomalies at inference time. To assess the performance of models trained with MOCCA, we conduct extensive experiments on publicly available datasets. We show that our proposed method achieves performance comparable or superior to the state-of-the-art approaches available in the literature.
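A minimal sketch of the second training step and of the inference-time scoring, assuming each selected encoder layer yields a flattened feature vector: the loss pulls each layer's representation towards the centroid computed on anomaly-free training data, and the anomaly score simply sums the per-layer distances. Function names and the way distances are combined are illustrative rather than the paper's exact choices.

```python
import torch

def layer_centroid_loss(features, centroids):
    """Second-stage objective: minimize the L2 distance between each layer's
    representation and its anomaly-free centroid (sketch)."""
    return sum(((f - c) ** 2).sum(dim=1).mean() for f, c in zip(features, centroids))

def anomaly_score(features, centroids):
    """At inference, combine per-layer distances into one score per sample (here a sum)."""
    return sum(((f - c) ** 2).sum(dim=1) for f, c in zip(features, centroids))

# Toy example with two encoder layers of different widths
feats = [torch.randn(8, 128), torch.randn(8, 64)]
cents = [torch.zeros(128), torch.zeros(64)]      # centroids of anomaly-free data
loss = layer_centroid_loss(feats, cents)
scores = anomaly_score(feats, cents)             # higher = more anomalous
```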
Computational units in artificial neural networks follow a simplified model of biological neurons. In the biological model, the output signal of a neuron runs down the axon, splits following the many branches at its end, and passes identically to all the downstream neurons of the network. Each of these downstream neurons uses its copy of the signal as one of many dendritic inputs, integrates them all, and fires an output if the result is above some threshold. In an artificial neural network, this translates to the nonlinear filtering of the signal being performed in the upstream neuron, so that in practice the same activation is shared by all the downstream neurons that use that signal as their input. Dendrites thus play a passive role. We propose a slightly more complex model of the biological neuron, in which dendrites play an active role: the activation at the output of the upstream neuron becomes optional, and instead the signals travelling through each dendrite undergo independent nonlinear filterings before the linear combination. We implement this new model as a ReLU computational unit and discuss its biological plausibility. We compare this new computational unit with the standard one and describe it from a geometrical point of view. We provide a Keras implementation of this unit for fully connected and convolutional layers and estimate the change in FLOPs and weights. We then use these layers in ResNet architectures on CIFAR-10, CIFAR-100, Imagenette, and Imagewoof, obtaining performance improvements over standard ResNets of up to 1.73%. Finally, we prove a universal representation theorem for continuous functions on compact sets and show that this new unit has more representational power than its standard counterpart.
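The paper provides a Keras implementation; below is an illustrative PyTorch sketch of the same idea, assuming the "independent nonlinear filtering" is realized as a ReLU applied to each weighted dendritic signal before the summation. The class name, initialization, and the explicit (batch, out, in) intermediate tensor are assumptions made for clarity, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DendriticLinear(nn.Module):
    """Each input (dendrite) is passed through its own nonlinearity *before* the
    weighted sum, instead of one shared activation applied after it (sketch)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):                              # x: (batch, in_features)
        pre = x.unsqueeze(1) * self.weight             # (batch, out, in): per-dendrite signals
        return torch.relu(pre).sum(dim=-1) + self.bias # nonlinearity per dendrite, then sum

layer = DendriticLinear(784, 128)
out = layer(torch.randn(32, 784))                      # (32, 128)
```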
Background: Image analysis applications in digital pathology include various methods for segmenting regions of interest. Their identification is one of the most complex steps and is therefore of great interest for the study of robust methods that do not necessarily rely on a machine learning (ML) approach. Method: A fully automatic and optimized segmentation process for different datasets is a prerequisite for classifying and diagnosing Indirect ImmunoFluorescence (IIF) raw data. This study describes a deterministic computational neuroscience approach for identifying cells and nuclei. It departs from the conventional neural network approach, yet it matches their quantitative and qualitative performance and is also robust to adversarial noise. The method is robust, based on formally correct functions, and does not require tuning on specific datasets. Results: This work demonstrates the robustness of the method against variability in parameters such as image size, mode, and signal-to-noise ratio. We validated the method on two datasets (Neuroblastoma and NucleusSegData) using images annotated by independent medical doctors. Conclusions: The definition of deterministic and formally correct methods, from a functional to a structural point of view, guarantees the achievement of optimized and functionally correct results. The excellent performance of our deterministic method (NeuronalAlg) in segmenting cells and nuclei from fluorescence images was measured with quantitative indicators and compared with that achieved by three published ML approaches.
The broad usage of mobile devices nowadays, the sensitivity of the information contained in them, and the shortcomings of current mobile user authentication methods are calling for novel, secure, and unobtrusive solutions to verify the users' identity. In this article, we propose TypeFormer, a novel Transformer architecture to model free-text keystroke dynamics performed on mobile devices for the purpose of user authentication. The proposed model consists of Temporal and Channel Modules enclosing two Long Short-Term Memory (LSTM) recurrent layers, Gaussian Range Encoding (GRE), a multi-head Self-Attention mechanism, and a Block-Recurrent structure. Experimenting on one of the largest public databases to date, the Aalto mobile keystroke database, TypeFormer outperforms current state-of-the-art systems, achieving Equal Error Rate (EER) values of 3.25% using only 5 enrolment sessions of 50 keystrokes each. In this way, we contribute to reducing the traditional performance gap of the challenging mobile free-text scenario with respect to its desktop and fixed-text counterparts. Additionally, we analyse the behaviour of the model under different experimental configurations, such as the length of the keystroke sequences and the number of enrolment sessions, showing margin for improvement with more enrolment data. Finally, a cross-database evaluation is carried out, demonstrating the robustness of the features extracted by TypeFormer in comparison with existing approaches.
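Since the systems above are compared via the Equal Error Rate, a small reference sketch of how EER is typically computed from genuine and impostor score distributions may help. It assumes higher scores indicate a better match to the claimed user; the data and threshold sweep are illustrative and not tied to TypeFormer's evaluation code.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """EER: the operating point where the false rejection rate of genuine users
    equals the false acceptance rate of impostors (approximated over observed scores)."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    frr = np.array([(genuine_scores < t).mean() for t in thresholds])
    far = np.array([(impostor_scores >= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2

# Toy example with synthetic comparison scores
rng = np.random.default_rng(0)
genuine = rng.normal(1.0, 0.5, 1000)    # scores for the true claimed identity
impostor = rng.normal(0.0, 0.5, 1000)   # scores for other users
print(f"EER: {equal_error_rate(genuine, impostor):.2%}")
```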
Digital media have enabled access to unprecedented literary knowledge. Authors, readers, and scholars are now able to discover and share an increasing amount of information about books and their authors. Notwithstanding, digital archives are still unbalanced: writers from non-Western countries are less represented, and such a condition leads to the perpetuation of old forms of discrimination. In this paper, we present the Under-Represented Writers Knowledge Graph (URW-KG), a resource designed to explore and possibly amend this lack of representation by gathering and mapping information about works and authors from Wikidata and three other sources: Open Library, Goodreads, and Google Books. Experiments based on KG embeddings show that the integrated information encoded in the graph allows scholars and users to be more easily exposed to non-Western literary works and authors compared with Wikidata alone. This opens the way to the development of fairer and more effective tools for author discovery and exploration.